1.
Article in English | MEDLINE | ID: mdl-38082885

ABSTRACT

Block-design is a popular experimental paradigm for functional near-infrared spectroscopy (fNIRS). Traditional block-design analysis techniques such as generalized linear modeling (GLM) and waveform averaging (WA) assume that the brain is a time-invariant system, which is a flawed assumption. In this paper, we propose a parametric Gaussian model to quantify the time-variant behavior found across consecutive trials of block-design fNIRS experiments. Using simulated data at different signal-to-noise ratios (SNRs), we demonstrate that our proposed technique is capable of characterizing Gaussian-like fNIRS signal features at SNRs of 3 dB and above. When used to fit recorded data from an auditory block-design experiment, the model parameter values quantitatively revealed statistically significant changes in fNIRS responses across trials, consistent with visual inspection of data from individual trials. Our results suggest that our model effectively captures trial-to-trial differences in response, enabling researchers to study time-variant brain responses using block-design fNIRS experiments.
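The trial-by-trial fitting idea can be sketched as follows. This is a minimal illustration assuming a three-parameter Gaussian (amplitude, latency, width) and a simple grid search; the parameter names and the search strategy are illustrative, not the authors' actual estimation procedure:

```python
import numpy as np

def gaussian_response(t, amp, latency, width):
    """Parametric Gaussian model of a single-trial fNIRS response."""
    return amp * np.exp(-((t - latency) ** 2) / (2.0 * width ** 2))

def fit_gaussian(t, y, amps, latencies, widths):
    """Least-squares fit by coarse grid search over candidate parameters."""
    best, best_err = None, np.inf
    for a in amps:
        for m in latencies:
            for w in widths:
                err = np.sum((y - gaussian_response(t, a, m, w)) ** 2)
                if err < best_err:
                    best, best_err = (a, m, w), err
    return best
```

Fitting each trial separately yields per-trial parameter estimates; changes in these estimates across consecutive trials can then be tested statistically, as described in the abstract.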


Subject(s)
Brain , Spectroscopy, Near-Infrared , Spectroscopy, Near-Infrared/methods , Brain/diagnostic imaging , Brain/physiology , Linear Models
2.
Article in English | MEDLINE | ID: mdl-38083703

ABSTRACT

Resting-state functional connectivity is a promising tool for understanding and characterizing brain network architecture. However, obtaining long, uninterrupted recordings of resting-state data is challenging in many clinically relevant populations. Moreover, the interpretation of connectivity results may depend heavily on the data length and the functional connectivity measure used. We compared the performance of three frequency-domain connectivity measures (magnitude-squared, wavelet, and multitaper coherence) and the effect of data length ranging from 3 to 9 minutes. Performance was characterized by the ability to distinguish two groups of channel pairs with known different connectivity strengths. While all methods considered improved at distinguishing the two groups as data length increased, wavelet coherence performed best for the shortest time window of 3 minutes. Knowing which measure is most reliable when only shorter fNIRS recordings are available could make functional connectivity biomarkers more feasible in clinical populations of interest.
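Magnitude-squared coherence, one of the three measures compared above, can be computed with standard tools. This sketch assumes a ~10 Hz fNIRS sampling rate and a 0.009-0.08 Hz resting-state band; both are common choices in the fNIRS literature, not values taken from the paper:

```python
import numpy as np
from scipy.signal import coherence

def band_coherence(x, y, fs, band=(0.009, 0.08), nperseg=256):
    """Mean magnitude-squared coherence between two channels within a
    frequency band. nperseg must be large enough that at least one
    Welch frequency bin falls inside the band."""
    f, cxy = coherence(x, y, fs=fs, nperseg=nperseg)
    mask = (f >= band[0]) & (f <= band[1])
    return cxy[mask].mean()
```

To study the data-length effect, the same function can simply be applied to truncated copies of the recordings (e.g., the first 3, 6, and 9 minutes).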


Subject(s)
Brain Mapping , Brain , Brain/diagnostic imaging , Brain Mapping/methods , Spectrum Analysis
3.
Article in English | MEDLINE | ID: mdl-38083712

ABSTRACT

Many studies on morphology analysis show that if short inter-stimulus intervals separate tasks, the hemodynamic response amplitude returns to the resting-state baseline before the subsequent stimulation onset; hence, responses to successive tasks do not overlap. Accordingly, popular brain imaging analysis techniques assume changes in hemodynamic response amplitude subside after a short time (around 15 seconds). However, whether this assumption holds when studying brain functional connectivity has yet to be investigated. This paper assesses whether the functional connectivity network in control trials returns to the resting-state functional connectivity network. Traditionally, control trials in block-design experiments are used to evaluate response morphology in the absence of a stimulus. We analyzed data from an event-related experiment with audio and visual stimuli and a resting-state recording. Our results showed that functional connectivity networks during control trials were more similar to task networks than to resting-state networks. In other words, contrary to task-related changes in hemodynamic amplitude, where responses settle after a short time, the brain's functional connectivity networks do not return to their intrinsic resting-state network over such short intervals.


Subject(s)
Magnetic Resonance Imaging , Nerve Net , Magnetic Resonance Imaging/methods , Nerve Net/diagnostic imaging , Nerve Net/physiology , Rest/physiology , Brain/diagnostic imaging , Brain/physiology , Neuroimaging
4.
J Neural Eng ; 20(1)2023 02 24.
Article in English | MEDLINE | ID: mdl-36763991

ABSTRACT

Objective. Hearing is an important sensory function that plays a key role in how children learn to speak and develop language skills. Although previous neuroimaging studies have established that much of brain network maturation happens in early childhood, our understanding of the developmental trajectory of language areas is still very limited. We hypothesized that the typical developmental trajectory of language areas in early childhood could be established by analyzing changes in functional connectivity in normal-hearing infants at different ages using functional near-infrared spectroscopy. Approach. Resting-state data were recorded from two bilateral temporal and prefrontal regions associated with language processing by measuring the relative changes of oxy-hemoglobin (HbO) and deoxy-hemoglobin (HbR) concentrations. Connectivity was calculated using magnitude-squared coherence of channel pairs located in (a) inter-hemispheric homologous and (b) intra-hemispheric brain regions, to assess connectivity between homologous regions across hemispheres and between two regions of interest in the same hemisphere, respectively. Main results. A linear regression model fitted to age vs. coherence for the inter-hemispheric homologous test group revealed a significant coefficient of determination for both HbO (R² = 0.216, p = 0.0169) and HbR (R² = 0.206, p = 0.0198). A significant coefficient of determination was also found for the intra-hemispheric test group for HbO (R² = 0.237, p = 0.0117) but not for HbR (R² = 0.111, p = 0.0956). Significance. The findings from the HbO data suggest that both inter-hemispheric homologous and intra-hemispheric connectivity between primary language regions strengthen significantly with age over the first year of life. Mapping out the developmental trajectory of primary language areas in normal-hearing infants, as measured by functional connectivity, could allow us to better understand altered connectivity and its effects on language delays in infants with hearing impairments.
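The group-level analysis described (a linear regression of coherence on age, reported as R² and p) can be reproduced on hypothetical data roughly as:

```python
import numpy as np
from scipy.stats import linregress

def age_coherence_trend(age_months, coherence_vals):
    """Fit coherence ~ age and report slope, R^2, and p-value,
    mirroring the reported group-level regression. The inputs here
    are hypothetical per-infant values, not the study's data."""
    res = linregress(age_months, coherence_vals)
    return res.slope, res.rvalue ** 2, res.pvalue
```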


Subject(s)
Brain , Spectroscopy, Near-Infrared , Child , Humans , Infant , Child, Preschool , Spectroscopy, Near-Infrared/methods , Brain/metabolism , Brain Mapping/methods , Language , Hemoglobins , Magnetic Resonance Imaging
5.
Ear Hear ; 44(4): 776-786, 2023.
Article in English | MEDLINE | ID: mdl-36706073

ABSTRACT

OBJECTIVES: Cardiac responses (e.g., heart rate changes) arising from an autonomic response to sensory stimuli have been reported in several studies. This study investigated whether heart rate information extracted from functional near-infrared spectroscopy (fNIRS) data can be used to assess the discrimination of speech sounds in sleeping infants. This study also investigated the adaptation of the heart rate response over multiple, sequential stimulus presentations. DESIGN: fNIRS data were recorded from 23 infants with no known hearing loss, aged 2 to 10 months. Speech syllables were presented using a habituation/dishabituation test paradigm: the infant's heart rate response was first habituated by repeating blocks of one speech sound; then, the heart rate response was dishabituated with a contrasting (novel) speech sound. This stimulus presentation sequence was repeated for as long as the infants were asleep. RESULTS: The group-level average heart rate response to the novel stimulus was greater than that to the habituated first sound, indicating that sleeping infants were able to discriminate the speech sound contrast. A significant adaptation of the heart rate responses was seen over the session duration. CONCLUSION: The dishabituation response could be a valuable marker for speech discrimination, especially when used in conjunction with the fNIRS hemodynamic response.
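A group-level comparison of novel versus habituated responses could be tested, for example, with a paired permutation test. Both the choice of test and the per-infant values below are hypothetical illustrations, not the authors' analysis:

```python
import numpy as np

def paired_permutation_test(novel, habituated, n_perm=10000, seed=0):
    """One-sided paired permutation (sign-flip) test for
    novel > habituated responses, one value per infant."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(novel) - np.asarray(habituated)
    observed = diffs.mean()
    # Under the null, each paired difference is equally likely to be +/-.
    signs = rng.choice([-1.0, 1.0], size=(n_perm, diffs.size))
    null = (signs * diffs).mean(axis=1)
    return (np.sum(null >= observed) + 1) / (n_perm + 1)
```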


Subject(s)
Deafness , Speech Perception , Humans , Infant , Speech Perception/physiology , Heart Rate , Spectroscopy, Near-Infrared , Speech
6.
iScience ; 25(8): 104737, 2022 Aug 19.
Article in English | MEDLINE | ID: mdl-35938045

ABSTRACT

Sensory deprivation causes structural and functional changes in the human brain. Cochlear implantation delivers immediate reintroduction of auditory sensory information. Previous reports have indicated that over a year is required for the brain to reestablish canonical cortical processing patterns after the reintroduction of auditory stimulation. We utilized functional near-infrared spectroscopy (fNIRS) to investigate brain activity to natural speech stimuli directly after cochlear implantation. We presented 12 cochlear implant recipients, who each had a minimum of 12 months of auditory deprivation, with unilateral auditory- and visual-speech stimuli. Regardless of the side of implantation, canonical responses were elicited primarily on the contralateral side of stimulation as early as 1 h after device activation. These data indicate that auditory pathway connections are sustained during periods of sensory deprivation in adults, and that typical cortical lateralization is observed immediately following the reintroduction of auditory sensory input.

7.
PLoS One ; 17(4): e0267588, 2022.
Article in English | MEDLINE | ID: mdl-35468160

ABSTRACT

The present study aimed to investigate degraded speech perception and binaural unmasking using functional near-infrared spectroscopy (fNIRS). Normal-hearing listeners were tested while attending to unprocessed or vocoded speech, presented to the left ear at two speech-to-noise ratios (SNRs). Additionally, by comparing monaural versus diotic masker noise, we measured binaural unmasking. Our primary research question was whether the prefrontal cortex and temporal cortex responded differently to varying listening configurations. Our a priori regions of interest (ROIs) were located at the left dorsolateral prefrontal cortex (DLPFC) and auditory cortex (AC). The left DLPFC has been reported to be involved in attentional processes when listening to degraded speech and in spatial hearing processing, while the AC has been reported to be sensitive to speech intelligibility. Comparisons of cortical activity between these two ROIs revealed significantly different fNIRS response patterns. Further, we showed a significant positive correlation between self-reported task difficulty and fNIRS responses in the DLPFC, with a negative but non-significant correlation for the left AC, suggesting that the two ROIs played different roles in effortful speech perception. Our secondary question was whether activity within three sub-regions of the lateral PFC (LPFC), including the DLPFC, was differentially affected by varying speech-noise configurations. We found significant effects of spectral degradation and SNR, and significant differences in fNIRS response amplitudes between the three regions, but no significant interaction between ROI and speech type, or between ROI and SNR. When attending to speech with monaural and diotic noise, participants reported the latter condition to be easier; however, no significant main effect of masker condition on cortical activity was observed. For cortical responses in the LPFC, a significant interaction between SNR and masker condition was observed. These findings suggest that binaural unmasking affects cortical activity by improving the speech reception threshold in noise, rather than by reducing the effort exerted.


Subject(s)
Spectroscopy, Near-Infrared , Speech Perception , Acoustic Stimulation/methods , Humans , Noise , Speech Intelligibility , Speech Perception/physiology
8.
Neurophotonics ; 9(1): 015001, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35071689

ABSTRACT

Significance: Functional near-infrared spectroscopy (fNIRS) is a neuroimaging tool that can measure resting-state functional connectivity; however, non-neuronal components present in fNIRS signals introduce false discoveries in connectivity, which can impact interpretation of functional networks. Aim: We investigated the effect of short channel correction on resting-state connectivity by removing non-neuronal signals from fNIRS long channel data. We hypothesized that false discoveries in connectivity can be reduced, hence improving the discriminability of functional networks of known, different connectivity strengths. Approach: A principal component analysis-based short channel correction technique was applied to resting-state data of 10 healthy adult subjects. Connectivity was analyzed using magnitude-squared coherence of channel pairs in connectivity groups of homologous and control brain regions, which are known to differ in connectivity. Results: By removing non-neuronal components using short channel correction, significant reduction of coherence was observed for oxy-hemoglobin concentration changes in frequency bands associated with resting-state connectivity that overlap with the Mayer wave frequencies. The results showed that short channel correction reduced spurious correlations in connectivity measures and improved the discriminability between homologous and control groups. Conclusions: Resting-state functional connectivity analysis with short channel correction performs better than without correction in its ability to distinguish functional networks with distinct connectivity characteristics.
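A minimal sketch of PCA-based short channel correction in the spirit described above, assuming the leading principal components of the short channels capture the superficial systemic signal. The array shapes, the number of components, and the regression step are illustrative, not the paper's exact pipeline:

```python
import numpy as np

def short_channel_pca_correct(long_data, short_data, n_components=2):
    """Remove the leading principal components of the short-channel
    signals (assumed to reflect superficial systemic activity) from
    each long channel via ordinary least squares.
    long_data: (T, n_long), short_data: (T, n_short)."""
    S = short_data - short_data.mean(axis=0)
    # Principal component scores of the short channels via SVD.
    U, s, Vt = np.linalg.svd(S, full_matrices=False)
    comps = U[:, :n_components] * s[:n_components]        # (T, k)
    # Regress the components (plus an intercept) out of every long channel.
    X = np.column_stack([comps, np.ones(len(comps))])
    beta, *_ = np.linalg.lstsq(X, long_data, rcond=None)
    return long_data - X[:, :-1] @ beta[:-1]
```

Connectivity measures such as magnitude-squared coherence would then be computed on the corrected long-channel signals.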

9.
Int J Audiol ; 61(2): 166-172, 2022 Feb.
Article in English | MEDLINE | ID: mdl-34106802

ABSTRACT

OBJECTIVE: To develop and validate an Australian version of a behavioural test for assessing listening task difficulty at high speech intelligibility levels. DESIGN: In the SWIR-Aus test, listeners perform two tasks: identify the last word of each of seven sentences in a list and recall the identified words after each list. First, the test material was developed by creating seven-sentence lists with similar final-word features. Then, for the validation, participant's performance on the SWIR-Aus test was compared when a binary mask noise reduction algorithm was on and off. STUDY SAMPLE: All participants in this study had normal hearing thresholds. Nine participants (23.8-56.0 years) participated in the characterisation of the speech material. Another thirteen participants (18.4-59.1 years) participated in a pilot test to determine the SNR to use at the validation stage. Finally, twenty-four new participants (20.0-56.9 years) participated in the validation of the test. RESULTS: The results of the validation of the test showed that recall and identification scores were significantly better when the binary mask noise reduction algorithm was on compared to off. CONCLUSIONS: The SWIR-Aus test was developed using Australian speech material and can be used for assessing task difficulty at high speech intelligibility levels.


Subject(s)
Speech Intelligibility , Speech Perception , Auditory Perception , Australia , Humans , Noise/adverse effects
10.
Sci Rep ; 11(1): 24006, 2021 12 14.
Article in English | MEDLINE | ID: mdl-34907273

ABSTRACT

Speech detection and discrimination ability are important measures of hearing that may inform crucial audiological intervention decisions for individuals with a hearing impairment. However, behavioral assessment of speech discrimination can be difficult and inaccurate in infants, prompting the need for an objective measure of speech detection and discrimination ability. In this study, the authors used functional near-infrared spectroscopy (fNIRS) as that objective measure. Twenty-three infants aged 2 to 10 months participated, all of whom had passed newborn hearing screening or diagnostic audiology testing. They were presented with speech tokens at a comfortable listening level during natural sleep using a habituation/dishabituation paradigm. The authors hypothesized that fNIRS responses to speech token detection as well as speech token contrast discrimination could be measured in individual infants. The authors found significant fNIRS responses to speech detection in 87% of tested infants (false positive rate 0%), as well as to speech discrimination in 35% of tested infants (false positive rate 9%). The results show initial promise for the use of fNIRS as an objective clinical tool for measuring infant speech detection and discrimination ability; the authors highlight the further optimization of test procedures and analysis techniques that would be required to raise accuracy and reliability to the levels needed for clinical decision-making.


Subject(s)
Acoustic Stimulation , Spectroscopy, Near-Infrared , Speech Perception/physiology , Speech/physiology , Female , Humans , Infant , Male
11.
Hear Res ; 406: 108256, 2021 07.
Article in English | MEDLINE | ID: mdl-34051607

ABSTRACT

As an alternative to fMRI, functional near-infrared spectroscopy (fNIRS) is a relatively new tool for observing cortical activation. However, its spatial resolution is reduced compared to fMRI, and often the exact locations of fNIRS optodes and specific anatomical information are not known. The aim of this study was to explore the location and range of specific regions of interest that are sensitive to detecting cortical activation using fNIRS in response to auditory- and visual-only connected speech. Two approaches to a priori region-of-interest selection were explored. First, broad regions corresponding to the auditory cortex and occipital lobe were analysed. Next, the fNIRS Optode Location Decider (fOLD) tool was used to divide the auditory and visual regions into two subregions corresponding to distinct anatomical structures. The Auditory-A and -B regions corresponded to Heschl's gyrus and the planum temporale, respectively. The Visual-A region corresponded to the superior occipital gyrus and the cuneus, and the Visual-B region corresponded to the middle occipital gyrus. The experimental stimulus consisted of a connected speech signal segmented into 12.5-sec blocks and was presented in either an auditory-only or visual-only condition. Group-level results for eight normal-hearing adult participants, averaged over the broad regions of interest, revealed significant auditory-evoked activation for both the left and right broad auditory regions of interest. No significant activity was observed for any other broad region of interest in response to any stimulus condition. When divided into subregions, there was a significant positive auditory-evoked response in the left and right Auditory-A regions, suggesting activation near the primary auditory cortex in response to auditory-only speech. There was a significant positive visual-evoked response in the Visual-B region, suggesting middle occipital gyrus activation in response to visual-only speech. In the Visual-A region, however, there was a significant negative visual-evoked response, suggesting a significant decrease in oxygenated hemoglobin in the superior occipital gyrus and the cuneus in response to visual-only speech. Distinct response characteristics, either positive or negative, in adjacent subregions within the temporal and occipital lobes were fairly consistent at the individual level. Results suggest that temporal regions near Heschl's gyrus may be the most advantageous location in adults for identifying hemodynamic responses to complex auditory speech signals using fNIRS. In the occipital lobe, regions corresponding to the facial processing pathway may prove advantageous for measuring positive responses to visual speech using fNIRS.


Subject(s)
Auditory Cortex , Spectroscopy, Near-Infrared , Speech Perception , Acoustic Stimulation , Adult , Auditory Cortex/diagnostic imaging , Brain Mapping , Humans , Magnetic Resonance Imaging , Speech
12.
J Neural Eng ; 18(4)2021 06 04.
Article in English | MEDLINE | ID: mdl-34010826

ABSTRACT

Objective. Stimulus-elicited changes in electroencephalography (EEG) recordings can be represented using Fourier magnitude and phase features (Makeig et al., 2004, Trends Cogn. Sci. 8, 204-10). The present study aimed to quantify how much information about hearing responses is contained in the magnitude, quantified by event-related spectral perturbations (ERSPs), and in the phase, quantified by inter-trial coherence (ITC). By testing whether one feature contained more information and whether this information was mutually exclusive between the features, we aimed to relate specific EEG magnitude and phase features to hearing perception. Approach. EEG responses were recorded from 20 adults presented with acoustic stimuli, and from 20 adult cochlear implant users presented with electrical stimuli. Both groups were presented with short, 50 ms stimuli at varying intensity levels relative to their hearing thresholds. Extracted ERSP and ITC features were inputs to a linear discriminant analysis classifier (Wong et al., 2016, J. Neural Eng. 13, 036003). The classifier then predicted whether the EEG signal contained information about the sound stimuli based on the input features. Classifier decoding accuracy was quantified with the mutual information measure (Cottaris and Elfar, 2009, J. Neural Eng. 6, 026007; Hawellek et al., 2016, Proc. Natl Acad. Sci. 113, 13492-7) and compared across the two feature sets, and against the case where both feature sets were combined. Main results. We found that classifiers using either the ITC or ERSP feature set could decode hearing perception, but ITC-feature classifiers were able to decode responses at a lower but still audible stimulation intensity, making ITC more useful than ERSP for hearing threshold estimation. We also found that combining the information from both feature sets did not improve decoding significantly, implying that ERSP brain dynamics make a limited contribution to the EEG response, possibly due to the stimuli used in this study. Significance. We successfully related hearing perception to an EEG measure that does not require behavioral feedback from the listener; such an objective measure is important in both neuroscience research and clinical audiology.
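The ITC feature can be illustrated compactly: at each frequency, take each trial's Fourier phase as a unit phasor and average across trials. Perfectly phase-locked responses give ITC near 1 and random phases give ITC near 0. This is a minimal sketch of the measure itself, not the authors' processing pipeline:

```python
import numpy as np

def inter_trial_coherence(trials):
    """ITC per frequency from a (n_trials, n_samples) array."""
    spectra = np.fft.rfft(trials, axis=1)
    mag = np.maximum(np.abs(spectra), 1e-12)  # guard near-zero bins
    phasors = spectra / mag                    # unit-magnitude phase terms
    return np.abs(phasors.mean(axis=0))        # 1 = phase-locked, ~0 = random
```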


Subject(s)
Cochlear Implants , Evoked Potentials, Auditory , Acoustic Stimulation , Acoustics , Auditory Threshold , Electroencephalography , Hearing
13.
Trends Hear ; 25: 2331216520985678, 2021.
Article in English | MEDLINE | ID: mdl-33634750

ABSTRACT

As musicians have been shown to have a range of superior auditory skills to non-musicians (e.g., pitch discrimination ability), it has been hypothesized by many researchers that music training can have a beneficial effect on speech perception in populations with hearing impairment. This hypothesis relies on an assumption that the benefits seen in musicians are due to their training and not due to innate skills that may support successful musicianship. This systematic review examined the evidence from 13 longitudinal training studies that tested the hypothesis that music training has a causal effect on speech perception ability in hearing-impaired listeners. The papers were evaluated for quality of research design and appropriate analysis techniques. Only 4 of the 13 papers used a research design that allowed a causal relation between music training and outcome benefits to be validly tested, and none of those 4 papers with a better quality study design demonstrated a benefit of music training for speech perception. In spite of the lack of valid evidence in support of the hypothesis, 10 of the 13 papers made claims of benefits of music training, showing a propensity for confirmation bias in this area of research. It is recommended that future studies that aim to evaluate the association of speech perception ability and music training use a study design that differentiates the effects of training from those of innate perceptual and cognitive skills in the participants.


Subject(s)
Hearing Loss , Music , Speech Perception , Hearing , Hearing Loss/diagnosis , Hearing Loss/therapy , Humans , Pitch Discrimination
14.
J Assoc Res Otolaryngol ; 22(1): 81-94, 2021 02.
Article in English | MEDLINE | ID: mdl-33108575

ABSTRACT

Variations in the condition of the neural population along the length of the cochlea can degrade the spectral and temporal representation of sounds conveyed by cochlear implants (CIs), thereby limiting speech perception. One measurement that has been proposed as an estimate of neural survival (the number of remaining functional neurons) or neural health (the health of those remaining neurons) is the effect of stimulation parameters, such as the interphase gap (IPG), on the amplitude growth function (AGF) of the electrically evoked compound action potential (ECAP). The extent to which such measures reflect neural factors, rather than non-neural factors (e.g. electrode orientation, electrode-modiolus distance, and impedance), depends crucially upon how the AGF data are analysed. However, there is currently no consensus in the literature on the correct method for interpreting changes in the ECAP AGF due to changes in stimulation parameters. We present a simple theoretical model of the effect of IPG on ECAP AGFs, along with a re-analysis of both animal and human data that measured the IPG effect. Both the theoretical model and the re-analysis of the animal data suggest that the IPG effect on ECAP AGF slope (IPG slope effect), measured using either a linear or logarithmic input-output scale, does not successfully control for the effects of non-neural factors. Both the model and the data suggest that the appropriate way to estimate neural health is to measure the IPG offset effect, defined as the dB offset between the linear portions of the ECAP AGFs for two stimuli differing only in IPG.
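The proposed IPG offset effect can be illustrated with a small sketch: fit straight lines to the linear portions of the two AGFs (amplitude versus level in dB) and convert the intercept difference into a horizontal dB offset. The shared-slope assumption and input format here are illustrative, not the paper's fitting procedure:

```python
import numpy as np

def ipg_offset_db(levels_short, amps_short, levels_long, amps_long):
    """dB offset between the linear portions of two ECAP AGFs differing
    only in IPG. Assumes inputs are already restricted to each AGF's
    linear range and that the two lines are approximately parallel."""
    s1, i1 = np.polyfit(levels_short, amps_short, 1)
    s2, i2 = np.polyfit(levels_long, amps_long, 1)
    slope = 0.5 * (s1 + s2)          # shared-slope assumption
    # Horizontal shift between two parallel lines: d-intercept / slope.
    return (i2 - i1) / slope
```

A positive value means the longer-IPG AGF reaches a given amplitude at a lower stimulation level, consistent with the idea that the offset, rather than the slope change, indexes neural health.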


Subject(s)
Action Potentials , Cochlear Nerve , Electric Stimulation , Evoked Potentials , Cochlear Implants , Evoked Potentials, Auditory , Humans
15.
PLoS One ; 15(12): e0244186, 2020.
Article in English | MEDLINE | ID: mdl-33362260

ABSTRACT

Functional near-infrared spectroscopy (fNIRS) is a non-invasive technique used to measure changes in oxygenated (HbO) and deoxygenated (HbR) hemoglobin related to neuronal activity. fNIRS signals are contaminated by systemic responses in the extracerebral tissue (superficial layer) of the head, as fNIRS uses a back-reflection measurement. Using short channels that are sensitive only to responses in the extracerebral tissue, but not to those in the deeper layers where target neuronal activity occurs, has been the 'gold standard' for reducing systemic responses in fNIRS data from adults. When short channels are not available or feasible, an alternative, the anti-correlation (Anti-Corr) method, has been adopted. To date, no study has directly compared the outcomes of the two approaches. In this study, we compared the Anti-Corr method with the 'gold standard' in reducing systemic responses to improve fNIRS neural signal quality. We used eight short channels (8 mm) in a group of adults and conducted a principal component analysis (PCA) to extract the two components that contributed most to the responses in the eight short channels, which were assumed to contain the global components in the extracerebral tissue. We then used a general linear model (GLM), with and without event-related regressors, to regress the two principal components out of the regular (30 mm) fNIRS channels, i.e., two GLM-PCA methods. Our results showed that the two GLM-PCA methods performed similarly, that both GLM-PCA methods and the Anti-Corr method improved fNIRS signal quality, and that the two GLM-PCA methods outperformed the Anti-Corr method.


Subject(s)
Brain/diagnostic imaging , Spectroscopy, Near-Infrared/methods , Female , Hemoglobins/metabolism , Humans , Male , Principal Component Analysis , Sensitivity and Specificity , Spectroscopy, Near-Infrared/standards , Young Adult
16.
PLoS One ; 15(11): e0241695, 2020.
Article in English | MEDLINE | ID: mdl-33206675

ABSTRACT

Chronic tinnitus is a debilitating condition which affects 10-20% of adults and can severely impact their quality of life. Currently there is no objective measure of tinnitus that can be used clinically. Clinical assessment of the condition relies on subjective feedback from individuals, which is not always reliable. We investigated the sensitivity of functional near-infrared spectroscopy (fNIRS) for differentiating individuals with and without tinnitus and for identifying fNIRS features associated with subjective ratings of tinnitus severity. We recorded fNIRS signals in the resting state and in response to auditory or visual stimuli from 25 individuals with chronic tinnitus and 21 controls matched for age and hearing loss. Severity of tinnitus was rated using the Tinnitus Handicap Inventory, and subjective ratings of tinnitus loudness and annoyance were measured on a visual analogue scale. Following statistical group comparisons, machine learning methods including feature extraction and classification were applied to the fNIRS features to classify patients with tinnitus versus controls and to differentiate tinnitus at different severity levels. Resting-state measures of connectivity between temporal regions and frontal and occipital regions were significantly higher in patients with tinnitus compared to controls. In the tinnitus group, temporal-occipital connectivity showed a significant increase with subjective ratings of loudness. Also in this group, both visual and auditory evoked responses were significantly reduced in the visual and auditory regions of interest, respectively. Naïve Bayes classifiers were able to distinguish patients with tinnitus from controls with an accuracy of 78.3%. An accuracy of 87.32% was achieved using neural networks to differentiate patients with slight/mild versus moderate/severe tinnitus. Our findings show the feasibility of using fNIRS and machine learning to develop an objective measure of tinnitus. Such a measure would greatly benefit clinicians and patients by providing a tool to objectively assess new treatments and patients' treatment progress.
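A naive Bayes classifier of the kind mentioned can be sketched in a few lines. This tiny Gaussian naive Bayes, trained on hypothetical features and labels, stands in for whatever implementation the authors used:

```python
import numpy as np

class TinyGaussianNB:
    """Minimal Gaussian naive Bayes: per-class, per-feature Gaussians
    with independent features, plus class log-priors."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.mu_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        self.var_ = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes_])
        self.logprior_ = np.log([np.mean(y == c) for c in self.classes_])
        return self

    def predict(self, X):
        # log p(x | c) under independent Gaussians, summed over features.
        ll = -0.5 * (np.log(2 * np.pi * self.var_[:, None, :])
                     + (X[None] - self.mu_[:, None, :]) ** 2
                     / self.var_[:, None, :]).sum(-1)
        return self.classes_[np.argmax(ll + self.logprior_[:, None], axis=0)]
```

In practice the features would be the extracted fNIRS measures (e.g., connectivity values and evoked-response amplitudes), and accuracy would be estimated with cross-validation rather than on the training set.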


Subject(s)
Machine Learning , Spectroscopy, Near-Infrared/methods , Adult , Aged , Evoked Potentials, Auditory/physiology , Female , Humans , Male , Middle Aged , Photic Stimulation , Tinnitus/physiopathology
17.
Hear Res ; 398: 108091, 2020 12.
Article in English | MEDLINE | ID: mdl-33059310

ABSTRACT

Cochlear implant (CI) sound coding strategies based on simultaneous stimulation lead to an increased loudness percept compared to sequential stimulation using the same current levels. This is due to loudness summation as a result of channel interactions. Studying the loudness perception evoked by dual channels compared to single channels can be useful for optimizing sound coding strategies that use simultaneous current pulses. Fourteen users of HiRes90k implants and one user of a CII implant loudness-balanced single-channel against dual-channel stimuli with varying distance between the simultaneous channels. In this study, each component of a dual channel was a virtual channel, which shared current across two adjacent electrodes. Balancing was performed at threshold and at comfortable level, for two spatial references (apical and basal), and for dual channels with different relative current ratios. Increasing the distance between the dual channels decreased the amount of current compensation required in the dual channel to reach equal loudness with a single-channel component, by an average of 0.24 dB/mm, with no significant difference between threshold and most comfortable level. If the components of the dual channels were not at equal loudness, the loudness summation was reduced with respect to the equal-loudness case. The results were incorporated into an existing loudness model by McKay et al. (2003). The predictions from the adapted model were evaluated by comparing the loudness evoked by simultaneous and sequential sound coding strategies. On average, the adapted model's predictions deviated from the actual behavioral loudness-balancing adjustments in electrical level between simultaneous and sequential processing strategies by 0.24 dB.


Subject(s)
Electric Stimulation , Sound , Cochlear Implantation , Cochlear Implants , Loudness Perception
18.
Front Psychol; 11: 611517, 2020.
Article in English | MEDLINE | ID: mdl-33519626

ABSTRACT

Cochlear implants electrically stimulate surviving auditory neurons in the cochlea to provide severely or profoundly deaf people with access to hearing. Signal processing strategies derive frequency-specific information from the acoustic signal and code amplitude changes in frequency bands onto amplitude changes of current pulses emitted by the tonotopically arranged intracochlear electrodes. This article first describes how parameters of the electrical stimulation influence the loudness evoked and then summarizes two different phenomenological models developed by McKay and colleagues that have been used to explain psychophysical effects of stimulus parameters on loudness, detection, and modulation detection. The Temporal Model is applied to single-electrode stimuli and integrates cochlear neural excitation using a central temporal integration window analogous to that used in models of normal hearing. Perceptual decisions are made using decision criteria applied to the output of the integrator. By fitting the model parameters to a variety of psychophysical data, inferences can be made about how electrical stimulus parameters influence neural excitation in the cochlea. The Detailed Model is applied to multi-electrode stimuli, and includes effects of electrode interaction at a cochlear level and a transform between integrated excitation and specific loudness. The Practical Method of loudness estimation is a simplification of the Detailed Model and can be used to estimate the relative loudness of any multi-electrode pulsatile stimuli without the need to model excitation at the cochlear level. Clinical applications of these models to novel sound processing strategies are described.
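A generic sketch of the temporal-integration idea described for the Temporal Model is given below: per-pulse neural excitation is passed through an exponential integration window and a decision is made on the integrator's peak output. The window shape, time constant, and excitation values are placeholders for illustration, not McKay and colleagues' fitted parameters.

```python
import numpy as np

def integrated_excitation(pulse_times_ms, pulse_excitation, window_ms=7.0):
    """Peak output of an exponential temporal integrator driven by pulses.

    Each pulse contributes excitation that decays with time constant
    window_ms; contributions from nearby pulses sum, so closely spaced
    pulses yield a larger peak than any single pulse alone.
    """
    t = np.arange(0.0, max(pulse_times_ms) + 5 * window_ms, 0.1)
    out = np.zeros_like(t)
    for pt, ex in zip(pulse_times_ms, pulse_excitation):
        mask = t >= pt
        out[mask] += ex * np.exp(-(t[mask] - pt) / window_ms)
    return out.max()
```

With this kind of front end, a detection or loudness criterion applied to the integrator output can reproduce rate-dependent effects: two pulses 1 ms apart produce a larger peak than either pulse in isolation.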

19.
Hear Res; 377: 24-33, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30884368

ABSTRACT

Cochlear implant users require fitting of electrical threshold and comfort levels for optimal access to sound. In this study, we used single-channel cortical auditory evoked potentials (CAEPs) obtained from 20 participants using a Nucleus device. A fully objective method to estimate threshold levels was developed, using growth function fitting and the peak phase-locking value feature. Results demonstrated that growth function fitting is a viable method for estimating threshold levels in cochlear implant users, with a strong correlation (r = 0.979, p < 0.001) with behavioral thresholds. Additionally, we compared threshold estimates using CAEPs acquired from a standard montage (Cz to mastoid) against a montage of recording channels near the cochlear implant, simulating recording from the device itself. The correlation between estimated and behavioral thresholds remained strong (r = 0.966, p < 0.001); however, the recording time needed to be increased to produce a similar estimate accuracy. Finally, a method for estimating comfort levels was investigated, which showed that the comfort level estimates were mildly correlated with behavioral comfort levels (r = 0.50, p = 0.024).
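The growth-function-fitting idea can be sketched as follows: fit the growth of a response feature (here a stand-in for the peak phase-locking value) against stimulus level, and take the level where the fit crosses a noise floor as the estimated threshold. The feature values, the linear growth shape, and the zero noise floor below are synthetic assumptions; the paper's actual fitting procedure may differ.

```python
import numpy as np

def estimate_threshold(levels, feature, noise_floor=0.0):
    """Estimate threshold as the level where a linear fit to the
    feature-vs-level growth function crosses the noise floor."""
    slope, intercept = np.polyfit(levels, feature, 1)
    return (noise_floor - intercept) / slope

# Synthetic example: feature grows linearly above a true threshold of 175.
levels = np.array([180.0, 190.0, 200.0, 210.0])   # current levels (device units)
feature = 0.02 * (levels - 175.0)                 # synthetic growth function
```

On this synthetic data, `estimate_threshold(levels, feature)` recovers the true threshold of 175, because the fitted line crosses zero exactly there.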


Subject(s)
Auditory Threshold , Cochlear Implantation/instrumentation , Cochlear Implants , Electroencephalography , Evoked Potentials, Auditory , Loudness Perception , Persons With Hearing Impairments/rehabilitation , Prosthesis Fitting , Acoustic Stimulation , Adult , Aged , Aged, 80 and over , Electric Stimulation , Female , Humans , Male , Middle Aged , Persons With Hearing Impairments/psychology , Predictive Value of Tests , Prosthesis Design , Treatment Outcome
20.
PLoS One; 14(2): e0212940, 2019.
Article in English | MEDLINE | ID: mdl-30817808

ABSTRACT

Functional near-infrared spectroscopy (fNIRS) is a non-invasive brain imaging technique that measures changes in oxygenated and deoxygenated hemoglobin concentration and can provide a measure of brain activity. In addition to neural activity, fNIRS signals contain components that can be used to extract physiological information such as cardiac measures. Previous studies have shown changes in cardiac activity in response to different sounds. This study investigated whether cardiac responses collected using fNIRS differ with the loudness of sounds. fNIRS data were collected from 28 normal-hearing participants. Cardiac response measures evoked by broadband, amplitude-modulated sounds were extracted for four sound intensities ranging from near threshold to comfortably loud (15, 40, 65 and 90 dB sound pressure level (SPL)). Following onset of the noise stimulus, heart rate initially decreased for sounds at 15 and 40 dB SPL, reaching a significantly lower rate at 15 dB SPL. For sounds at 65 and 90 dB SPL, heart rate increased. To quantify the timing of significant changes, inter-beat intervals were assessed. For sounds at 40 dB SPL, an immediate significant change was found in the first two inter-beat intervals following sound onset. At other levels, the most significant change appeared later (beats 3 to 5 following sound onset). In conclusion, changes in heart rate were associated with sound level, with a clear difference in response to near-threshold sounds compared with comfortably loud sounds. These findings may be used alone or in conjunction with other measures, such as fNIRS brain activity, for evaluation of hearing ability.
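The inter-beat-interval (IBI) analysis described above can be sketched as follows: IBIs are successive differences of beat times, and the change after sound onset is assessed per beat position against the pre-onset baseline. The beat times in the usage example are synthetic; the paper's statistical testing per beat is not reproduced here.

```python
import numpy as np

def interbeat_intervals(beat_times_s):
    """Inter-beat intervals (s) as successive differences of beat times."""
    return np.diff(np.asarray(beat_times_s))

def ibi_change_after_onset(beat_times_s, onset_s, n_beats=5):
    """Change of the first n_beats post-onset IBIs from the mean pre-onset
    IBI; positive values indicate heart-rate slowing (longer intervals)."""
    beat_times_s = np.asarray(beat_times_s)
    ibis = interbeat_intervals(beat_times_s)
    interval_start = beat_times_s[:-1]          # each IBI starts at a beat
    pre = ibis[interval_start < onset_s]
    post = ibis[interval_start >= onset_s][:n_beats]
    return post - pre.mean()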


Subject(s)
Hearing/physiology , Heart Rate/physiology , Loudness Perception/physiology , Acoustic Stimulation , Adult , Auditory Threshold/physiology , Brain/physiology , Female , Functional Neuroimaging , Heart Sounds/physiology , Humans , Male , Spectroscopy, Near-Infrared , Young Adult
SELECTION OF CITATIONS
SEARCH DETAIL
...